Results 1 - 20 of 36
1.
Am J Epidemiol ; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38576172

ABSTRACT

How do we construct our causal DAGs, e.g., for life course modelling and analysis? In this commentary I review how the data-driven construction of causal DAGs (causal discovery) has evolved, what promises it holds, and what limitations or caveats must be considered. In conclusion, I find that expert- or theory-driven model building might benefit from more checking against the data, and that causal discovery could bring new ideas into old theories.

2.
Sci Rep ; 14(1): 6822, 2024 03 21.
Article in English | MEDLINE | ID: mdl-38514750

ABSTRACT

Childhood obesity is a complex disorder that appears to be influenced by an interacting system of many factors. Taking this complexity into account, we aim to investigate the causal structure underlying childhood obesity. Our focus is on identifying potential early, direct or indirect, causes of obesity that may be promising targets for prevention strategies. Using a causal discovery algorithm, we estimate a cohort causal graph (CCG) over the life course from childhood to adolescence. We adapt a popular method, the PC-algorithm, so that it handles missing values by multiple imputation, deals with mixed discrete and continuous variables, and takes background knowledge such as the time structure of cohort data into account. The algorithm is then applied to learn the causal structure among 51 variables including obesity, early life factors, diet, lifestyle, insulin resistance, puberty stage and cultural background of 5112 children from the European IDEFICS/I.Family cohort across three waves (2007-2014). The robustness of the learned causal structure is addressed in a series of alternative and sensitivity analyses; in particular, we use bootstrap resamples to assess the stability of aspects of the learned CCG. Our results suggest some, but only indirect, possible causal paths from early modifiable risk factors, such as audio-visual media consumption and physical activity, to obesity (measured by age- and sex-adjusted BMI z-scores) 6 years later.


Subject(s)
Insulin Resistance, Pediatric Obesity, Humans, Child, Adolescent, Pediatric Obesity/epidemiology, Longitudinal Studies, Risk Factors, Diet, Body Mass Index
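The adapted PC-algorithm itself is not reproduced here. As a minimal sketch of one of its building blocks, the snippet below pools a Fisher-z test of conditional independence across multiply imputed datasets using Rubin's rules; the function names and the pooling of the z-statistics are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): pooling a Fisher-z conditional
# independence test, X independent of Y given S, across multiply imputed
# datasets -- one building block of a PC-type algorithm adapted to missing data.
import numpy as np
from scipy import stats

def fisher_z(data, x, y, cond):
    """Fisher-z estimate and standard error of the partial correlation of columns x, y given cond."""
    cols = [x, y] + list(cond)
    prec = np.linalg.pinv(np.corrcoef(data[:, cols], rowvar=False))
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    se = 1.0 / np.sqrt(data.shape[0] - len(cond) - 3)
    return z, se

def pooled_ci_test(imputed_datasets, x, y, cond):
    """Combine the per-imputation Fisher-z estimates with Rubin's rules; returns a p-value."""
    zs, ses = zip(*(fisher_z(d, x, y, cond) for d in imputed_datasets))
    m = len(zs)
    within = np.mean(np.square(ses))
    between = np.var(zs, ddof=1) if m > 1 else 0.0
    total_var = within + (1 + 1 / m) * between
    return 2 * stats.norm.sf(abs(np.mean(zs)) / np.sqrt(total_var))
```

A PC-type search would call such a test repeatedly while growing conditioning sets, and temporal background knowledge (e.g. forbidding edges from later to earlier waves) would further restrict the search.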
3.
Int J Behav Nutr Phys Act ; 21(1): 1, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38169385

ABSTRACT

BACKGROUND: It is unclear whether a hypothetical intervention targeting psychosocial well-being or one targeting emotion-driven impulsiveness is more effective in reducing unhealthy food choices. We therefore aimed to compare the (separate) causal effects of psychosocial well-being and emotion-driven impulsiveness on European adolescents' sweet and fat propensity. METHODS: We included 2,065 participants of the IDEFICS/I.Family cohort (mean age: 13.4 years) providing self-reported data on sweet propensity (score range: 0 to 68.4), fat propensity (range: 0 to 72.6), emotion-driven impulsiveness using the UPPS-P negative urgency subscale, and psychosocial well-being using the KINDLR Questionnaire. We estimated, separately, the average causal effects of psychosocial well-being and emotion-driven impulsiveness on sweet and fat propensity, applying a semi-parametric doubly robust method (targeted maximum likelihood estimation). Further, we investigated a potential indirect effect of psychosocial well-being on sweet and fat propensity mediated via emotion-driven impulsiveness using a causal mediation analysis. RESULTS: If all adolescents, hypothetically, had high levels of psychosocial well-being, compared to low levels, we estimated a decrease in average sweet propensity of 1.43 [95% confidence interval: 0.25 to 2.61]. A smaller effect was estimated for fat propensity. Similarly, if all adolescents had low levels of emotion-driven impulsiveness, compared to high levels, average sweet propensity would decrease by 2.07 [0.87 to 3.26] and average fat propensity by 1.85 [0.81 to 2.88]. The indirect effect of psychosocial well-being via emotion-driven impulsiveness was 0.61 [0.24 to 1.09] for average sweet propensity and 0.55 [0.13 to 0.86] for average fat propensity. CONCLUSIONS: An intervention targeting emotion-driven impulsiveness, compared to one targeting psychosocial well-being, would be marginally more effective in reducing sweet and fat propensity in adolescents.


Subject(s)
Food Preferences, Taste, Humans, Adolescent, Surveys and Questionnaires, Self Report, Emotions
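The TMLE analysis itself is not shown here. As a simplified doubly robust stand-in, the sketch below computes an augmented inverse probability weighting (AIPW) estimate of an average causal effect of a binary exposure on a continuous outcome; variable names, model choices, and the truncation level are assumptions for illustration only.

```python
# Simplified doubly robust (AIPW) sketch -- a stand-in for the TMLE analysis
# described above, not the authors' code. A is a binary exposure, Y a continuous
# outcome (e.g. a sweet-propensity score), X the covariate matrix.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, A, Y):
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)  # outcome model, A=1
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)  # outcome model, A=0
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)                                   # truncate extreme weights
    dr1 = mu1 + A * (Y - mu1) / ps                                 # augmented terms
    dr0 = mu0 + (1 - A) * (Y - mu0) / (1 - ps)
    est = np.mean(dr1 - dr0)
    se = np.std(dr1 - dr0, ddof=1) / np.sqrt(len(Y))
    return est, (est - 1.96 * se, est + 1.96 * se)
```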
4.
Biom J ; 66(1): e2200209, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37643390

ABSTRACT

We consider the question of variable selection in linear regressions, in the sense of identifying the correct direct predictors (those variables that have nonzero coefficients given all candidate predictors). Best subset selection (BSS) is often considered the "gold standard," with its use being restricted only by its NP-hard nature. Alternatives such as the least absolute shrinkage and selection operator (Lasso) or the elastic net (Enet) have become methods of choice in high-dimensional settings. A recent proposal represents BSS as a mixed-integer optimization problem, so that large problems have become computationally feasible. We present an extensive neutral comparison assessing the ability of BSS, forward stepwise selection (FSS), Lasso, and Enet to select the correct direct predictors. The simulation considers a range of settings that are challenging with regard to dimensionality (number of observations and variables), signal-to-noise ratios, and correlations between predictors. As a fair measure of performance, we primarily used the best possible F1-score for each method, and results were confirmed by alternative performance measures and practical criteria for choosing the tuning parameters and subset sizes. Surprisingly, it was only in settings where the signal-to-noise ratio was high and the variables were uncorrelated that BSS reliably outperformed the other methods, even in low-dimensional settings. Furthermore, FSS performed almost identically to BSS. Our results shed new light on the usual presumption of BSS being, in principle, the best choice for selecting the correct direct predictors. Especially for correlated variables, alternatives like Enet are faster and appear to perform better in practical settings.


Subject(s)
Linear Models, Computer Simulation
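A minimal sketch of the kind of support-recovery comparison described above, scoring the selected predictors of Lasso and forward stepwise selection with the F1-score against the true set of direct predictors; the simulation settings are illustrative, and the mixed-integer-optimization BSS itself is not included.

```python
# Illustrative support-recovery comparison (not the study's simulation design):
# F1-score of the selected predictors for Lasso vs. forward stepwise selection.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n, p, k = 200, 50, 5                          # observations, candidates, direct predictors
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 1.0                                # first k variables are the direct predictors
y = X @ beta + rng.normal(size=n)
truth = beta != 0

lasso = LassoCV(cv=5).fit(X, y)               # Lasso with cross-validated penalty
sel_lasso = lasso.coef_ != 0

fss = SequentialFeatureSelector(LinearRegression(), n_features_to_select=k,
                                direction="forward").fit(X, y)
sel_fss = fss.get_support()                   # forward stepwise selection of k variables

print("F1 Lasso:", round(f1_score(truth, sel_lasso), 2))
print("F1 FSS:  ", round(f1_score(truth, sel_fss), 2))
```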
5.
J Clin Epidemiol ; 160: 100-109, 2023 08.
Article in English | MEDLINE | ID: mdl-37343895

ABSTRACT

OBJECTIVES: Epidemiological studies often have missing data, which are commonly handled by multiple imputation (MI). Standard (default) MI procedures use simple linear covariate functions in the imputation model. We examine the bias that may be caused by accepting this default option and evaluate methods to identify problematic imputation models, providing practical guidance for researchers. STUDY DESIGN AND SETTING: Using simulation and real data analysis, we investigated how imputation model mis-specification affected MI performance, comparing results with complete records analysis (CRA). We considered scenarios in which imputation model mis-specification occurred because (i) the analysis model was mis-specified or (ii) the relationship between exposure and confounder was mis-specified. RESULTS: Mis-specification of the relationship between outcome and exposure, or between exposure and confounder, can cause biased CRA and MI estimates (in addition to any bias in the full-data estimate due to analysis model mis-specification). MI by predictive mean matching can mitigate model mis-specification. Methods for examining model mis-specification were effective in identifying mis-specified relationships. CONCLUSION: When using MI methods that assume data are missing at random (MAR), compatibility between the analysis and imputation models is necessary, but not sufficient, to avoid bias. We propose a step-by-step procedure for identifying and correcting mis-specification of imputation models.


Subject(s)
Data Analysis, Research Design, Humans, Statistical Data Interpretation, Computer Simulation, Bias
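A small simulation in the spirit of the guidance above, showing how a default linear imputation model can bias the coefficient of a quadratic term in the analysis model; all names, data-generating values, and the number of imputations are illustrative assumptions, not the paper's setup.

```python
# Illustrative simulation (not the paper's code): the analysis model contains a
# quadratic exposure term, the exposure X is missing at random given Y, and the
# default (linear) imputation model is incompatible with the analysis model.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + 0.5 * x**2 + rng.normal(size=n)          # true quadratic relationship

x_obs = x.copy()
x_obs[rng.random(n) < 1 / (1 + np.exp(-(y - 2)))] = np.nan   # MAR: missingness depends on observed y

def quadratic_coef(xv, yv):
    """Coefficient of x^2 in the analysis model y ~ x + x^2."""
    return LinearRegression().fit(np.column_stack([xv, xv**2]), yv).coef_[1]

mi_estimates = []
for m in range(10):                                           # 10 imputations
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    completed = imp.fit_transform(np.column_stack([x_obs, y]))
    mi_estimates.append(quadratic_coef(completed[:, 0], y))

print("full-data x^2 coefficient:      ", round(quadratic_coef(x, y), 3))
print("MI with linear imputation model:", round(float(np.mean(mi_estimates)), 3))
```

The second estimate is attenuated because the linear imputation model is incompatible with the quadratic analysis model, which is the kind of mis-specification the step-by-step checks above aim to detect.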
6.
Am J Epidemiol ; 192(8): 1415-1423, 2023 08 04.
Article in English | MEDLINE | ID: mdl-37139580

ABSTRACT

Studying causal exposure effects on dementia is challenging when death is a competing event. Researchers often interpret death as a potential source of bias, although bias cannot be defined or assessed if the causal question is not explicitly specified. Here we discuss 2 possible notions of a causal effect on dementia risk: the "controlled direct effect" and the "total effect." We provide definitions and discuss the "censoring" assumptions needed for identification in either case and their link to familiar statistical methods. We illustrate these concepts in a hypothetical randomized trial on smoking cessation in late midlife, and emulate such a trial using observational data from the Rotterdam Study, the Netherlands, 1990-2015. We estimated a total effect of smoking cessation (compared with continued smoking) on 20-year dementia risk of 2.1 (95% confidence interval: -0.1, 4.2) percentage points and a controlled direct effect of smoking cessation on 20-year dementia risk, had death been prevented, of -2.7 (95% confidence interval: -6.1, 0.8) percentage points. Our study highlights how analyses corresponding to different causal questions can have different results, here with point estimates on opposite sides of the null. Having a clear causal question in view of the competing event, together with transparent and explicit assumptions, is essential for interpreting results and potential bias.


Subject(s)
Dementia, Smoking Cessation, Humans, Smoking/adverse effects, Smoking/epidemiology, Goals, Causality, Smoking Cessation/methods, Dementia/epidemiology
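The two causal questions above map onto two familiar nonparametric estimators within a treatment arm. The sketch below contrasts them using the lifelines package: an Aalen-Johansen cumulative incidence for the total-effect risk (death treated as a competing event) and a Kaplan-Meier complement that censors deaths, as one would for the controlled direct effect under elimination of death (with its stronger "censoring" assumptions). The file and column names are assumptions, and the confounding adjustment of the emulated trial is omitted.

```python
# Sketch (assumed file/column names, no confounding adjustment): within one
# treatment arm, estimate dementia risk in two ways matching the two causal
# questions discussed above.
import pandas as pd
from lifelines import AalenJohansenFitter, KaplanMeierFitter

df = pd.read_csv("arm_quit_smoking.csv")   # columns: time, event (0=censored, 1=dementia, 2=death)

# Total-effect risk: cumulative incidence of dementia with death as a competing event
aj = AalenJohansenFitter()
aj.fit(df["time"], df["event"], event_of_interest=1)
risk_total = float(aj.cumulative_density_.iloc[-1, 0])

# Controlled-direct-effect risk (had death been prevented): deaths treated as censoring,
# which requires the additional "censoring" assumptions discussed in the paper
km = KaplanMeierFitter()
km.fit(df["time"], event_observed=(df["event"] == 1))
risk_cde = 1 - float(km.survival_function_.iloc[-1, 0])

print(f"risk of dementia, total-effect scale:            {risk_total:.3f}")
print(f"risk of dementia, controlled-direct-effect scale: {risk_cde:.3f}")
```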
7.
J Gerontol A Biol Sci Med Sci ; 78(7): 1172-1178, 2023 07 08.
Article in English | MEDLINE | ID: mdl-36869806

ABSTRACT

BACKGROUND: An important epidemiological question is understanding how vascular risk factors contribute to cognitive impairment. Using data from the Cardiovascular Health Cognition Study, we investigated how subclinical cardiovascular disease (sCVD) relates to cognitive impairment risk and the extent to which the hypothesized risk is mediated by the incidence of clinically manifested cardiovascular disease (CVD), both overall and within apolipoprotein E-4 (APOE-4) subgroups. METHODS: We adopted a novel "separable effects" causal mediation framework that assumes that sCVD has separably intervenable atherosclerosis-related components. We then ran several mediation models, adjusting for key covariates. RESULTS: We found that sCVD increased overall risk of cognitive impairment (risk ratio [RR] = 1.21, 95% confidence interval [CI]: 1.03, 1.44); however, there was little or no mediation by incident clinically manifested CVD (indirect effect RR = 1.02, 95% CI: 1.00, 1.03). We also found attenuated effects among APOE-4 carriers (total effect RR = 1.09, 95% CI: 0.81, 1.47; indirect effect RR = 0.99, 95% CI: 0.96, 1.01) and stronger findings among noncarriers (total effect RR = 1.29, 95% CI: 1.05, 1.60; indirect effect RR = 1.02, 95% CI: 1.00, 1.05). In secondary analyses restricting cognitive impairment to only incident dementia cases, we found similar effect patterns. CONCLUSIONS: We found that the effect of sCVD on cognitive impairment does not seem to be mediated by CVD, both overall and within APOE-4 subgroups. Our results were critically assessed via sensitivity analyses, and they were found to be robust. Future work is needed to fully understand the relationship between sCVD, CVD, and cognitive impairment.


Subject(s)
Cardiovascular Diseases, Cognitive Dysfunction, Humans, Cardiovascular Diseases/epidemiology, Cardiovascular Diseases/etiology, Cognitive Dysfunction/epidemiology, Risk Factors, Cognition, Apolipoprotein E4/genetics
8.
Clin Epidemiol ; 14: 1293-1303, 2022.
Article in English | MEDLINE | ID: mdl-36353307

ABSTRACT

Background: The efficacy of mammography screening in reducing breast cancer mortality has been demonstrated in randomized trials. However, treatment options - and hence prognosis - for advanced tumor stages as well as mammography techniques have considerably improved since completion of these trials. Consequently, the effectiveness of mammography screening under current conditions is unclear and controversial. The German mammography screening program (MSP), an organized population-based screening program, was gradually introduced between 2005 and 2008 and achieved nation-wide coverage in 2009. Objective: We describe in detail a study protocol for investigating the effectiveness of the German MSP in reducing breast cancer mortality in women aged 50 to 69 years based on health claims data. Specifically, the proposed study aims at estimating per-protocol effects of several screening strategies on cumulative breast cancer mortality. The first analysis will be conducted once 10-year follow-up data are available. Methods and Analysis: We will use claims data from five statutory health insurance providers in Germany, covering approximately 37.6 million individuals. To estimate the effectiveness of the MSP, hypothetical target trials will be emulated across time, an approach that has been demonstrated to minimize design-related biases. Specifically, the primary contrast will be in terms of the cumulative breast cancer mortality comparing the screening strategies of "never screen" versus "regular screening as intended by the MSP". Ethics and Dissemination: In Germany, the utilization of data from health insurances for scientific research is regulated by the Code of Social Law. All involved health insurance providers as well as the responsible authorities approved the use of the health claims data for this study. The Ethics Committee of the University of Bremen determined that studies based on claims data are exempt from institutional review. The findings of the proposed study will be published in peer-reviewed journals.

9.
Stat Med ; 41(23): 4716-4743, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35908775

ABSTRACT

Causal discovery algorithms estimate causal graphs from observational data. This can provide a valuable complement to analyses focusing on the causal relation between individual treatment-outcome pairs. Constraint-based causal discovery algorithms rely on conditional independence testing when building the graph. Until recently, these algorithms have been unable to handle missing values. In this article, we investigate two alternative solutions: test-wise deletion and multiple imputation. We establish necessary and sufficient conditions for the recoverability of causal structures under test-wise deletion, and argue that multiple imputation is more challenging in the context of causal discovery than for estimation. We conduct an extensive comparison by simulating from benchmark causal graphs: as one might expect, we find that test-wise deletion and multiple imputation both clearly outperform list-wise deletion and single imputation. Crucially, our results further suggest that multiple imputation is especially useful in settings with a small number of either Gaussian or discrete variables, but when the dataset contains a mix of both, neither method is uniformly best. The methods we compare include random forest imputation and a hybrid procedure combining test-wise deletion and multiple imputation. An application to data from the IDEFICS cohort study on diet- and lifestyle-related diseases in European children serves as an illustrative example.


Subject(s)
Algorithms, Research Design, Causality, Child, Cohort Studies, Humans
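As a complement to the pooled-imputation test sketched under entry 2 above, the snippet below illustrates the test-wise deletion alternative: each conditional independence test uses only the rows that are complete on the variables entering that particular test. This is an illustrative sketch, not the code compared in the paper.

```python
# Test-wise deletion sketch (illustrative only): a Fisher-z test of X independent
# of Y given S that uses only rows complete on the variables entering this test.
import numpy as np
from scipy import stats

def testwise_fisher_z(data, x, y, cond, alpha=0.05):
    cols = [x, y] + list(cond)
    sub = data[:, cols]
    sub = sub[~np.isnan(sub).any(axis=1)]                  # drop rows missing on these columns only
    n = sub.shape[0]
    if n < len(cols) + 4:                                  # too little data: do not reject independence
        return True
    prec = np.linalg.pinv(np.corrcoef(sub, rowvar=False))
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]), -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    return 2 * stats.norm.sf(abs(z)) > alpha               # True means "independent" (no edge)
```

List-wise deletion would instead drop every row with any missing entry before the search starts, which is what both approaches compared in the paper improve upon.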
10.
Int J Epidemiol ; 51(6): 1899-1909, 2022 12 13.
Article in English | MEDLINE | ID: mdl-35848950

ABSTRACT

BACKGROUND: Mendelian randomization (MR) is a powerful tool through which the causal effects of modifiable exposures on outcomes can be estimated from observational data. Most exposures vary throughout the life course, but MR is commonly applied to one measurement of an exposure (e.g. weight measured once between ages 40 and 60 years). It has been argued that MR provides biased causal effect estimates when applied to one measure of an exposure that varies over time. METHODS: We propose an approach that emphasizes the liability that causes the entire exposure trajectory. We demonstrate this approach using simulations and an applied example. RESULTS: We show that rather than estimating the direct or total causal effect of changing the exposure value at a given time, MR estimates the causal effect of changing the underlying liability for the exposure, scaled to the effect of the liability on the exposure at that time. As such, results from MR conducted at different time points are expected to differ (unless the effect of the liability on exposure is constant over time), as we illustrate by estimating the effect of body mass index measured at different ages on systolic blood pressure. CONCLUSION: Univariable MR results should not be interpreted as time-point-specific direct or total causal effects, but as the effect of changing the liability for the exposure. Estimates of how the effects of a genetic variant on an exposure vary over time, together with biological knowledge that provides evidence regarding likely effective exposure periods, are required to interpret time-point-specific causal effects.


Subject(s)
Mendelian Randomization Analysis, Humans, Adult, Middle Aged, Mendelian Randomization Analysis/methods, Body Mass Index, Blood Pressure/genetics, Causality
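A simulated sketch of the scaling point made above: a Wald-ratio MR estimate constructed from an exposure measured at a single age recovers the liability effect scaled by the liability-exposure association at that age, so estimates differ across measurement ages. All numbers and names are illustrative assumptions, not the paper's applied example.

```python
# Sketch (simulated data, illustrative only): Wald-ratio MR estimates using an
# exposure measured at a single time point differ across measurement ages even
# though the underlying liability effect on the outcome is fixed.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
g = rng.binomial(2, 0.3, size=n)                 # genetic variant (instrument)
liability = g + rng.normal(size=n)               # liability driving the exposure trajectory

x_age40 = 1.0 * liability + rng.normal(size=n)   # exposure at age 40
x_age60 = 0.5 * liability + rng.normal(size=n)   # weaker liability effect at age 60
y = 0.3 * liability + rng.normal(size=n)         # outcome affected by the liability

def wald_ratio(g, x, y):
    """Ratio of the gene-outcome and gene-exposure regression slopes."""
    return np.polyfit(g, y, 1)[0] / np.polyfit(g, x, 1)[0]

print("MR estimate using exposure at age 40:", round(wald_ratio(g, x_age40, y), 2))
print("MR estimate using exposure at age 60:", round(wald_ratio(g, x_age60, y), 2))
# Roughly 0.3 vs. 0.6: the same liability effect, rescaled by the liability's
# effect on the exposure at the chosen measurement time.
```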
11.
J Clin Epidemiol ; 149: 118-126, 2022 09.
Article in English | MEDLINE | ID: mdl-35680106

ABSTRACT

OBJECTIVES: We aimed to evaluate the effectiveness of screening colonoscopy in reducing the incidence of distal vs. proximal colorectal cancer (CRC) in persons aged 55-69 years. STUDY DESIGN AND SETTING: Using observational data from a German claims database (German Pharmacoepidemiological Research Database), we emulated a target trial with two arms: colonoscopy screening vs. no screening at baseline. Adjusted cumulative incidence of total, distal, and proximal CRC over 11 years of follow-up was estimated in 55-69-year-olds at average CRC risk and without colonoscopy, polypectomy, or fecal occult blood test before baseline. RESULTS: Overall, 307,158 persons were included (screening arm: 198,389; control arm: 117,399). The adjusted 11-year risk of any CRC was 1.62% in the screening group and 2.38% in the no-screening group, resulting in a relative risk of 0.68 (95% CI: 0.63-0.73). The relative risk was 0.67 (95% CI: 0.62-0.73) for distal CRC and 0.70 (95% CI: 0.63-0.79) for proximal CRC. The cumulative incidence curves of the two groups crossed after 6.7 years (distal CRC) and 5.0 years (proximal CRC). CONCLUSION: Our results suggest that colonoscopy is effective in preventing both distal and proximal CRC. Unlike previous studies not using a target trial approach, we found no relevant difference in effectiveness by tumor location.


Subject(s)
Colorectal Neoplasms, Early Detection of Cancer, Humans, Colonoscopy, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/epidemiology, Colorectal Neoplasms/prevention & control, Early Detection of Cancer/methods, Mass Screening/methods, Occult Blood, Prospective Studies, Middle Aged, Aged
12.
Biom J ; 64(2): 235-242, 2022 02.
Article in English | MEDLINE | ID: mdl-33576019

Subject(s)
Logic
13.
Clin Epidemiol ; 13: 1027-1038, 2021.
Article in English | MEDLINE | ID: mdl-34737647

ABSTRACT

PURPOSE: Investigating intended or unintended effects of sustained drug use is of high clinical relevance but remains methodologically challenging. This feasibility study aims to evaluate the usefulness of the parametric g-formula within a target trial framework applied to an extensive healthcare database, in order to address various sources of time-related bias and time-dependent confounding. PATIENTS AND METHODS: Based on the German Pharmacoepidemiological Research Database (GePaRD), we estimated the pancreatic cancer incidence comparing two hypothetical treatment strategies for type 2 diabetes mellitus (T2DM), i.e., (A) sustained metformin monotherapy vs (B) combination therapy with DPP-4 inhibitors after one year of metformin monotherapy. We included 77,330 persons with T2DM who started metformin therapy at baseline between 2005 and 2011. Key aspects for avoiding time-related biases and time-dependent confounding were the emulation of a target trial over a 7-year follow-up period and the application of the parametric g-formula. RESULTS: Over the 7-year follow-up period, 652 of the 77,330 study subjects had a diagnosis of pancreatic cancer. Assuming no unobserved confounding, we found evidence that the metformin/DPP-4i combination therapy increased the risk of pancreatic cancer compared to sustained metformin monotherapy (risk ratio: 1.47; 95% bootstrap CI: 1.07-1.94). The risk ratio decreased in sensitivity analyses addressing protopathic bias. CONCLUSION: While protopathic bias could not be fully ruled out, and computational challenges necessitated compromises in the analysis, the g-formula and target trial emulation proved useful: self-inflicted biases were avoided, observed time-varying confounding was adjusted for, and the estimated risks have a clear causal interpretation.
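The GePaRD analysis itself is far richer; as a toy illustration of the parametric g-formula's core logic (fit parametric working models, then simulate forward under each sustained strategy), the sketch below uses simulated data with a single time-varying confounder. All model forms and numbers are assumptions.

```python
# Toy discrete-time parametric g-formula sketch (illustrative only; not the
# GePaRD/target-trial analysis). L = time-varying confounder, A = treatment,
# Y = event indicator per interval. Two sustained strategies are compared.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, K = 20_000, 5
expit = lambda v: 1 / (1 + np.exp(-v))

# --- simulate observational person-interval data with treatment-confounder feedback
rows = []
for i in range(n):
    l_prev, a_prev = 0, 0
    for t in range(K):
        l = rng.binomial(1, expit(-1 + 1.5 * l_prev - 0.5 * a_prev))
        a = rng.binomial(1, expit(-0.5 + 1.0 * l))
        y = rng.binomial(1, expit(-3 + 1.0 * l - 0.7 * a))
        rows.append((l_prev, a_prev, l, a, y))
        if y:
            break
        l_prev, a_prev = l, a
Lprev, Aprev, L, A, Y = np.array(rows).T

# --- parametric working models: confounder given history, event hazard given current L, A
conf_model = LogisticRegression().fit(np.column_stack([Lprev, Aprev]), L)
haz_model = LogisticRegression().fit(np.column_stack([L, A]), Y)

# --- g-formula step: Monte Carlo simulation under a sustained treatment strategy
def cumulative_risk(treat, n_sim=50_000):
    l_prev = np.zeros(n_sim)
    a_prev = np.zeros(n_sim)
    at_risk = np.ones(n_sim, dtype=bool)
    risk = 0.0
    for t in range(K):
        p_l = conf_model.predict_proba(np.column_stack([l_prev, a_prev]))[:, 1]
        l = rng.binomial(1, p_l)
        a = np.full(n_sim, treat)
        p_y = haz_model.predict_proba(np.column_stack([l, a]))[:, 1]
        y = rng.binomial(1, p_y).astype(bool) & at_risk
        risk += y.mean()
        at_risk &= ~y
        l_prev, a_prev = l, a
    return risk

r_treat, r_control = cumulative_risk(1), cumulative_risk(0)
print("risk under sustained treatment:", round(r_treat, 3))
print("risk under no treatment:       ", round(r_control, 3))
print("risk ratio:", round(r_treat / r_control, 2))
```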

14.
Gesundheitswesen ; 83(S 02): S69-S76, 2021 Nov.
Article in German | MEDLINE | ID: mdl-34695869

ABSTRACT

Studies using secondary data such as health care claims data often face methodological challenges due to the time-dependence of key quantities or due to unmeasured confounding. In the present paper, we discuss approaches to avoid or suitably address various sources of potential bias. In particular, we illustrate the target trial principle, marginal structural models, and instrumental variables with examples from the GePaRD database. Finally, we discuss the strengths and limitations of record linkage, which can sometimes be used to supply missing information.


Subject(s)
Delivery of Health Care, Pharmacoepidemiology, Bias, Factual Databases, Germany/epidemiology
15.
Biometrics ; 77(4): 1165-1169, 2021 12.
Article in English | MEDLINE | ID: mdl-34510405

ABSTRACT

Huang proposes a method for assessing the impact of a point treatment on mortality, either directly or mediated by the occurrence of a nonterminal health event, based on data from a prospective cohort study in which the occurrence of the nonterminal health event may be preempted by death but not vice versa. The author uses a causal mediation framework to formally define causal quantities known as natural (in)direct effects. The novelty consists of adapting these concepts to a continuous-time modeling framework based on counting processes. In an effort to posit "scientifically interpretable estimands," statistical and causal assumptions are introduced for identification. In this commentary, we argue that these assumptions are not only difficult to interpret and justify, but are also likely violated in the hepatitis B motivating example and in other survival/time-to-event settings.


Subject(s)
Statistical Models, Causality, Humans, Prospective Studies
16.
Lifetime Data Anal ; 27(4): 588-631, 2021 10.
Article in English | MEDLINE | ID: mdl-34468923

ABSTRACT

In competing event settings, a counterfactual contrast of cause-specific cumulative incidences quantifies the total causal effect of a treatment on the event of interest. However, effects of treatment on the competing event may indirectly contribute to this total effect, complicating its interpretation. We previously proposed the separable effects to define direct and indirect effects of the treatment on the event of interest. This definition was given in a simple setting, where the treatment was decomposed into two components acting along two separate causal pathways. Here we generalize the notion of separable effects, allowing for interpretation, identification and estimation in a wide variety of settings. We propose and discuss a definition of separable effects that is applicable to general time-varying structures, where the separable effects can still be meaningfully interpreted as effects of modified treatments, even when they cannot be regarded as direct and indirect effects. For these settings we derive weaker conditions for identification of separable effects in studies where decomposed, or otherwise modified, treatments are not yet available; in particular, these conditions allow for time-varying common causes of the event of interest, the competing events and loss to follow-up. We also propose semi-parametric weighted estimators that are straightforward to implement. We stress that unlike previous definitions of direct and indirect effects, the separable effects can be subject to empirical scrutiny in future studies.


Subject(s)
Causality, Humans, Incidence
17.
Epidemiology ; 32(2): 209-219, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33512846

ABSTRACT

Causal mediation analysis is a useful tool for epidemiologic research, but it has been criticized for relying on a "cross-world" independence assumption that counterfactual outcome and mediator values are independent even in causal worlds where the exposure assignments for the outcome and mediator differ. This assumption is empirically difficult to verify and problematic to justify based on background knowledge. In the present article, we aim to assist the applied researcher in understanding this assumption. Synthesizing what is known about the cross-world independence assumption, we discuss the relationship between assumptions for causal mediation analyses, causal models, and nonparametric identification of natural direct and indirect effects. In particular, we give a practical example of an applied setting where the cross-world independence assumption is violated even without any post-treatment confounding. Further, we review possible alternatives to the cross-world independence assumption, including the use of bounds that avoid the assumption altogether. Finally, we carry out a numeric study in which the cross-world independence assumption is violated to assess the ensuing bias in estimating natural direct and indirect effects. We conclude with recommendations for carrying out causal mediation analyses.


Subject(s)
Mediation Analysis, Statistical Models, Bias, Causality, Humans
18.
Biom J ; 62(3): 532-549, 2020 05.
Article in English | MEDLINE | ID: mdl-30779372

ABSTRACT

We discuss causal mediation analyses for survival data and propose a new approach based on the additive hazards model. The emphasis is on a dynamic point of view, that is, understanding how the direct and indirect effects develop over time. Hence, importantly, we allow for a time-varying mediator. To define direct and indirect effects in such a longitudinal survival setting, we take an interventional approach (Didelez, 2018), where treatment is separated into one aspect affecting the mediator and a different aspect affecting survival. In general, this leads to a version of the nonparametric g-formula (Robins, 1986). In the present paper, we demonstrate that combining the g-formula with the additive hazards model and a sequential linear model for the mediator process results in simple and interpretable expressions for direct and indirect effects in terms of relative survival as well as cumulative hazards. Our results generalize and formalize the method of dynamic path analysis (Fosen, Ferkingstad, Borgan, & Aalen, 2006; Strohmaier et al., 2015). An application to data from a clinical trial on blood pressure medication is given.


Subject(s)
Biometry/methods, Statistical Models, Blood Pressure/drug effects, Clinical Trials as Topic, Humans, Survival Analysis
19.
Hum Genet ; 139(1): 121-136, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31134333

ABSTRACT

In the current era, with increasing availability of results from genetic association studies, finding genetic instruments for inferring causality in observational epidemiology has become apparently simple. Mendelian randomisation (MR) analyses are hence growing in popularity and, in particular, methods that can incorporate multiple instruments are being rapidly developed for these applications. Such analyses have enormous potential, but they all rely on strong, different, and inherently untestable assumptions. These have to be clearly stated and carefully justified for every application in order to avoid conclusions that cannot be replicated. In this article, we review the instrumental variable assumptions and discuss the popular linear additive structural model. We advocate the use of tests for the null hypothesis of 'no causal effect' and calculation of the bounds for a causal effect, whenever possible, as these do not rely on parametric modelling assumptions. We clarify the difference between a randomised trial and an MR study and we comment on the importance of validating instruments, especially when considering them for joint use in an analysis. We urge researchers to stand by their convictions, if satisfied that the relevant assumptions hold, and to interpret their results causally since that is the only reason for performing an MR analysis in the first place.


Subject(s)
Genetic Variation, Genome-Wide Association Study, Mendelian Randomization Analysis/methods, Molecular Epidemiology/methods, Humans
20.
Genet Epidemiol ; 43(4): 373-401, 2019 06.
Article in English | MEDLINE | ID: mdl-30635941

ABSTRACT

In Mendelian randomization (MR), inference about the causal relationship between a phenotype of interest and a response or disease outcome can be obtained by constructing instrumental variables from genetic variants. However, MR inference requires three assumptions, one of which is that the genetic variants only influence the outcome through the phenotype of interest. Pleiotropy, that is, the situation in which some genetic variants affect more than one phenotype, can invalidate these genetic variants for use as instrumental variables; thus a naive analysis will give biased estimates of the causal relation. Here, we present new methods (constrained instrumental variable [CIV] methods) to construct valid instrumental variables and perform adjusted causal effect estimation when pleiotropy exists and when the pleiotropic phenotypes are available. We demonstrate that a smoothed version of CIV performs approximate selection of genetic variants that are valid instruments, and provides unbiased estimates of the causal effects. We provide details on a number of existing methods, together with a comparison of their performance in a large series of simulations. CIV performs robustly across different pleiotropic violations of the MR assumptions. We also analyzed data from the Alzheimer's disease (AD) neuroimaging initiative (ADNI; Mueller et al., 2005. Alzheimer's Dementia, 11(1), 55-66) to disentangle causal relationships of several biomarkers with AD progression.


Subject(s)
Genetic Pleiotropy/physiology, Mendelian Randomization Analysis/methods, Algorithms, Epidemiologic Confounding Factors, Genetic Association Studies, Genetic Variation, Humans, Genetic Models, Phenotype